Search for: All records

Creators/Authors contains: "Lauriere, Mathieu"

Note: Clicking a Digital Object Identifier (DOI) takes you to an external site maintained by the publisher. Some full-text articles may not be available free of charge during the publisher's embargo period.

Some links on this page may take you to non-federal websites, whose policies may differ from those of this site.

  1. A traffic system can be interpreted as a multi-agent system in which vehicles choose the most efficient driving strategies, guided by interconnected goals. This paper aims to develop a family of mean field games (MFGs) for generic second-order traffic flow models (GSOMs), in which cars control their individual velocities to optimize their objective functions. GSOMs do not generally assume that cars optimize self-interested objectives, so such a game-theoretic reinterpretation offers insights into the agents' underlying behaviors. In general, an MFG allows one to model individuals on a microscopic level as rational utility-optimizing agents while translating rich microscopic behaviors into macroscopic models. Building on the MFG framework, we devise a new class of second-order traffic flow MFGs (i.e., GSOM-MFG), which control cars' acceleration to ensure smooth velocity changes. A fixed-point algorithm with a fictitious play technique is developed to solve the GSOM-MFG numerically (a schematic sketch of this iteration appears after this list). In numerical examples, different traffic patterns are presented under different cost functions. For real-world validation, we further use an inverse reinforcement learning (IRL) approach to uncover the underlying cost function on the Next Generation Simulation (NGSIM) data set. We formulate the problem of inferring cost functions as a min-max game and use an apprenticeship learning algorithm to solve for the cost function coefficients. The results show that our proposed GSOM-MFG is a generic framework that can accommodate various cost functions: the Aw-Rascle-Zhang (ARZ) and Lighthill-Whitham-Richards (LWR) traffic flow models arise as special cases of the GSOM-MFG when the costs are suitably specified.
     History: This paper has been accepted for the Transportation Science Special Issue on the ISTTT25 Conference.
     Funding: X. Di is supported by the National Science Foundation [CAREER Award CMMI-1943998]. E. Iacomini is partially supported by the Italian Research Center on High-Performance Computing, Big Data and Quantum Computing (ICSC) funded by MUR Missione 4-Next Generation EU (NGEU) [Spoke 1 "FutureHPC & BigData"]. C. Segala and M. Herty thank the Deutsche Forschungsgemeinschaft (DFG) for financial support [Grants 320021702/GRK2326, 333849990/IRTG-2379, B04, B05, and B06 of 442047500/SFB1481, HE5386/18-1,19-2,22-1,23-1,25-1, ERS SFDdM035; Germany's Excellence Strategy EXC-2023 Internet of Production 390621612; and the Excellence Strategy of the Federal Government and the Länder]. Support through the EU project DATAHYKING is also acknowledged. This work was also funded by the DFG [TRR 154, Mathematical Modelling, Simulation and Optimization Using the Example of Gas Networks, Projects C03 and C05, Project No. 239904186]. Moreover, E. Iacomini and C. Segala are members of the INdAM GNCS (Italian National Group for Scientific Computing).
    Free, publicly accessible full text available November 1, 2025
  2. We develop a general reinforcement learning framework for mean field control (MFC) problems. Such problems arise, for instance, as the limit of collaborative multi-agent control problems when the number of agents is very large. The asymptotic problem can be phrased as the optimal control of nonlinear dynamics. It can also be viewed as a Markov decision process (MDP), but the key difference from the usual RL setup is that the dynamics and the reward now depend on the state's probability distribution itself. Alternatively, it can be recast as an MDP on the Wasserstein space of measures. In this work, we introduce generic model-free algorithms based on the state-action value function at the mean field level, and we prove convergence for a prototypical Q-learning method (a minimal sketch appears after this list). We then implement an actor-critic method and report numerical results on two archetypal problems: a finite-space model motivated by a cybersecurity application and a continuous-space model motivated by an application to swarm motion.
  3. We investigate reinforcement learning for mean field control problems in discrete time, which can be viewed as Markov decision processes for a large number of exchangeable agents interacting in a mean field manner. Such problems arise, for instance, when a large number of robots communicate through a central unit that dispatches the optimal policy computed by minimizing the overall social cost. An approximate solution is obtained by learning the optimal policy of a generic agent interacting with the statistical distribution of the states of the other agents. We rigorously prove the convergence of exact and model-free policy gradient methods in a mean-field linear-quadratic setting (a minimal sketch appears after this list). We also provide graphical evidence of this convergence based on implementations of our algorithms.
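
To make the fixed-point fictitious play iteration of item 1 concrete, here is a minimal sketch for a finite-state mean field game. The grid size, horizon, congestion cost, and one-bin moves are all illustrative assumptions; this is not the paper's GSOM-MFG solver, only the generic pattern of alternating a best response against the current mean field with a running average of the induced population flows.

```python
# A minimal fictitious-play sketch for a finite-state mean field game.
# All constants (grid size, horizon, congestion cost) are toy assumptions.
import numpy as np

S, A, T = 20, 3, 30          # states (velocity bins), actions, horizon
moves = np.array([-1, 0, 1]) # each action shifts the state by one bin

def best_response(mu):
    """Backward DP: optimal policy of one agent against population flow mu[t, s]."""
    V = np.zeros(S)
    policy = np.zeros((T, S), dtype=int)
    for t in reversed(range(T)):
        Q = np.empty((S, A))
        for s in range(S):
            for a in range(A):
                s2 = np.clip(s + moves[a], 0, S - 1)
                # running cost: control effort + congestion (crowd-aversion) term
                Q[s, a] = 0.1 * moves[a] ** 2 + mu[t, s2] + V[s2]
        policy[t] = Q.argmin(axis=1)
        V = Q.min(axis=1)
    return policy

def flow(policy):
    """Forward pass: population distribution induced by a policy."""
    mu = np.zeros((T, S))
    mu[0] = np.ones(S) / S
    for t in range(T - 1):
        for s in range(S):
            s2 = np.clip(s + moves[policy[t, s]], 0, S - 1)
            mu[t + 1, s2] += mu[t, s]
    return mu

mu_bar = np.ones((T, S)) / S          # initial guess for the mean field
for k in range(1, 51):                # fictitious play: average past flows
    mu_k = flow(best_response(mu_bar))
    mu_bar += (mu_k - mu_bar) / k     # running average with weight 1/k
```

A fixed point of this loop is a flow that is consistent with the best response it induces, i.e., a mean field equilibrium; the 1/k averaging is what distinguishes fictitious play from plain fixed-point iteration.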
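For item 2, a minimal tabular sketch of Q-learning "at the mean field level": the control problem is lifted to an MDP whose state is the (discretized) population distribution itself. The two-state dynamics, the target-occupancy reward, and all constants are toy assumptions, not the paper's cybersecurity or swarm models.

```python
# Tabular Q-learning on a lifted mean-field MDP: the MDP state is the
# fraction mu of agents in state 1, binned; the action is a common control.
# Dynamics and reward below are toy assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
BINS = 21                      # discretization of mu in [0, 1]
ACTIONS = np.linspace(0, 1, 5) # common control: push rate toward state 1

def step(mu, a):
    """Mean-field transition: state-0 agents flip up at rate a, state-1 agents decay."""
    mu_next = np.clip(mu + 0.5 * a * (1 - mu) - 0.2 * mu, 0.0, 1.0)
    reward = -(mu_next - 0.7) ** 2 - 0.1 * a ** 2  # track occupancy 0.7 cheaply
    return mu_next, reward

Q = np.zeros((BINS, len(ACTIONS)))
to_bin = lambda mu: int(round(mu * (BINS - 1)))

alpha, gamma, eps = 0.1, 0.95, 0.1
for episode in range(2000):
    mu = rng.random()
    for t in range(50):
        b = to_bin(mu)
        i = rng.integers(len(ACTIONS)) if rng.random() < eps else Q[b].argmax()
        mu, r = step(mu, ACTIONS[i])
        # standard Q-learning update, but on the distribution-valued state
        Q[b, i] += alpha * (r + gamma * Q[to_bin(mu)].max() - Q[b, i])
```

The point of the lift is visible in `step`: both the transition and the reward depend on mu, which is exactly the feature that breaks the usual single-agent RL setup.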
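For item 3, a minimal sketch of a model-free policy gradient method in a scalar mean-field linear-quadratic problem, using a two-point zeroth-order gradient estimate so that no knowledge of the dynamics coefficients is needed. The dynamics, cost weights, and estimator details are illustrative assumptions, not the paper's exact setting.

```python
# Model-free policy gradient for a scalar mean-field LQ problem.
# Feedback u = -k x - kbar xbar, where xbar is the population average.
# All coefficients are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
A, Abar, B = 0.9, 0.3, 1.0          # x' = A x + Abar xbar + B u + noise
Qc, Qbar, R, T, N = 1.0, 0.5, 0.2, 30, 256

def cost(theta):
    """Average cost of the linear feedback over N simulated agents."""
    k, kbar = theta
    x = rng.normal(0, 1, N)
    J = 0.0
    for _ in range(T):
        xbar = x.mean()
        u = -k * x - kbar * xbar
        J += np.mean(Qc * x**2 + Qbar * (x - xbar)**2 + R * u**2)
        x = A * x + Abar * xbar + B * u + 0.1 * rng.normal(0, 1, N)
    return J / T

theta = np.zeros(2)
lr, sigma = 0.05, 0.1
for it in range(200):
    # two-point zeroth-order gradient estimate: only cost evaluations are used,
    # no model of (A, Abar, B) enters the update
    d = rng.normal(0, 1, 2)
    g = (cost(theta + sigma * d) - cost(theta - sigma * d)) / (2 * sigma) * d
    theta -= lr * g
print("learned gains (k, kbar):", theta)
```

The mean-field coupling enters only through the population average xbar, which is why a single pair of feedback gains (k, kbar) parameterizes the policy of every exchangeable agent.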